Autonomous systems not only need to understand their current environment, but should also be able to predict future actions conditioned on past states, for instance based on captured camera frames. However, existing models mainly focus on forecasting future video frames over short time horizons, which limits their use for long-term action planning. We propose Multi-Scale Hierarchical Prediction (MSPred), a novel video prediction model able to simultaneously forecast possible future outcomes at different levels of granularity and at different spatio-temporal scales. By combining spatial and temporal downsampling, MSPred efficiently predicts abstract representations such as human poses or locations over long time horizons, while still maintaining competitive performance for video frame prediction. In our experiments, we demonstrate that MSPred accurately predicts future video frames as well as high-level representations (e.g., keypoints or semantics) on bin-picking and action recognition datasets, while consistently outperforming popular approaches to future frame prediction. Furthermore, we ablate different modules and design choices in MSPred, experimentally validating that combining features of different spatial and temporal granularity leads to superior performance. Code and models to reproduce our experiments can be found at https://github.com/AIS-Bonn/MSPred.
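To make the multi-scale idea concrete, the sketch below shows one way such spatio-temporal downsampling could be wired up: each hierarchy level pools the encoder features to a coarser spatial grid and only updates its recurrent state every few frames, so higher levels evolve more slowly and can be decoded into longer-horizon, more abstract predictions. The layer sizes, module names, and use of GRU cells are illustrative assumptions, not the authors' actual architecture.

```python
# A minimal sketch of spatio-temporal downsampling in a hierarchical predictor
# (assumed stand-in, not the MSPred architecture).
import torch
import torch.nn as nn

class MultiScaleRecurrentSketch(nn.Module):
    def __init__(self, in_channels=64, hidden=128,
                 grids=(16, 8, 4), strides=(1, 4, 8)):
        super().__init__()
        self.strides = strides  # temporal stride per hierarchy level
        # coarser spatial grids for higher levels (spatial downsampling)
        self.pools = nn.ModuleList(nn.AdaptiveAvgPool2d(g) for g in grids)
        self.cells = nn.ModuleList(
            nn.GRUCell(in_channels * g * g, hidden) for g in grids)

    def forward(self, feature_seq):
        # feature_seq: list of (B, C, H, W) encoder features, one per time step
        B = feature_seq[0].shape[0]
        device = feature_seq[0].device
        states = [torch.zeros(B, c.hidden_size, device=device) for c in self.cells]
        for t, feat in enumerate(feature_seq):
            for lvl, (pool, cell, stride) in enumerate(
                    zip(self.pools, self.cells, self.strides)):
                if t % stride == 0:  # higher levels tick less often (temporal downsampling)
                    states[lvl] = cell(pool(feat).flatten(1), states[lvl])
        return states  # per-level latent states for level-specific decoders

# Usage with dummy encoder features for 16 frames.
frames = [torch.randn(2, 64, 32, 32) for _ in range(16)]
print([s.shape for s in MultiScaleRecurrentSketch()(frames)])
```

In MSPred itself, each level's state would feed a level-specific decoder, producing video frames at the finest level and more abstract outputs such as poses or locations at the coarser, slower levels.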
We seek methods to model, control, and analyze robot teams performing environmental monitoring tasks. During environmental monitoring, the goal is to have teams of robots collect various data throughout a fixed region for extended periods of time. Standard bottom-up task assignment methods do not scale as the number of robots and task locations increases, and they require computationally expensive replanning. Alternatively, top-down methods have been used to combat computational complexity, but most have been limited to analyses that focus on transition times between tasks. In this work, we study a class of nonlinear macroscopic models which we use to control a time-varying distribution of robots performing different tasks throughout an environment. Our proposed ensemble model and controller maintain desired time-varying populations of robots by leveraging naturally occurring interactions between robots performing tasks. We validate our approach at multiple fidelity levels, including experimental results, demonstrating its effectiveness for environmental monitoring.
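As a rough illustration of the top-down, macroscopic viewpoint, the sketch below integrates a simple linear population model in which the fraction of robots at each task evolves according to a transition-rate matrix. The linear form and the rate values are placeholder assumptions for intuition only; the paper's ensemble model is nonlinear and leverages robot-robot interactions.

```python
# A minimal macroscopic population model: dx/dt = K x, where x_i is the
# fraction of robots engaged in task i and the columns of K sum to zero so
# the total robot population is conserved. (Illustrative placeholder model.)
import numpy as np

def simulate(K, x0, dt=0.01, T=10.0):
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for _ in range(int(T / dt)):
        x = x + dt * (K @ x)  # forward-Euler integration step
        traj.append(x.copy())
    return np.array(traj)

# Example: three tasks with hypothetical switching rates.
K = np.array([[-0.5,  0.0,  0.3],
              [ 0.5, -0.2,  0.0],
              [ 0.0,  0.2, -0.3]])
traj = simulate(K, x0=[1.0, 0.0, 0.0])
print(traj[-1])  # the distribution of robots over tasks after T seconds
```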
Any strategy used to distribute a robot ensemble over a set of sequential tasks is subject to inaccuracy due to robot-level uncertainties and environmental influences on the robots' behavior. We approach the problem of inaccuracy during task allocation by modeling and controlling the overall ensemble behavior. Our model represents the allocation problem as a stochastic jump process, and we regulate the mean and variance of such a process. The main contributions of this paper are establishing a structure for the transition rates of the equivalent stochastic jump process and formally showing that this approach leads to decoupled parameters for adjusting the first- and second-order moments of the ensemble distribution over tasks, which gives us the flexibility to decrease the variance of the desired final distribution. This allows us to directly shape the impact of uncertainties on the group allocation over tasks. We introduce a detailed procedure for designing the gains to achieve the desired mean and show how the additional parameters impact the covariance matrix, which is directly associated with the precision of the task allocation. Our simulation and experimental results illustrate the successful control of several robot ensembles during task allocation.
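For intuition on the stochastic-jump-process view, the following sketch simulates a robot ensemble switching between tasks as a continuous-time Markov chain and estimates the mean and covariance of the resulting allocation, which are exactly the first- and second-order moments discussed above. The rate matrix and population size are made up for illustration, and the sketch does not include the paper's gain-design procedure.

```python
# Gillespie-style simulation of robots jumping between tasks (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def gillespie(rates, counts, T=20.0):
    counts = np.array(counts, dtype=int)
    t = 0.0
    while t < T:
        # propensity of a jump i -> j is rates[i, j] * (robots currently at task i)
        prop = rates * counts[:, None]
        total = prop.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)
        idx = rng.choice(prop.size, p=(prop / total).ravel())
        i, j = np.unravel_index(idx, prop.shape)
        counts[i] -= 1
        counts[j] += 1
    return counts

# 100 robots, 3 tasks; off-diagonal entries are per-robot switching rates.
rates = np.array([[0.0, 0.4, 0.0],
                  [0.1, 0.0, 0.3],
                  [0.2, 0.0, 0.0]])
samples = np.array([gillespie(rates, [100, 0, 0]) for _ in range(200)])
print("mean allocation:", samples.mean(axis=0))
print("covariance:\n", np.cov(samples.T))
```

The covariance printed at the end is the quantity the paper's additional parameters are designed to shape, i.e. the precision of the final task allocation.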
This paper focuses on the broadcast of information in robot networks with stochastic interconnection topologies. Problematic communication networks are almost unavoidable in areas where we wish to deploy multi-robot systems, usually due to a lack of environmental consistency, accessibility, and structure. We tackle this problem by modeling the broadcast of information in a multi-robot communication network as a stochastic process with random arrival times, which can be produced by irregular robot movements, wireless attenuation, and other environmental factors. Using this model, we provide and analyze a receding horizon control strategy to control the statistics of the information broadcast. The resulting strategy compels the robots to redirect their communication resources to different neighbors, according to the current state of the propagation process, in order to fulfill global broadcast requirements. Based on this method, we provide an approach to compute the expected time to broadcast the message to all nodes. Numerical examples are provided to illustrate the results.
Transfer operators offer linear representations and global, physically meaningful features of nonlinear dynamical systems. Discovering transfer operators, such as the Koopman operator, requires carefully crafted dictionaries of observables that act on the states of the dynamical system. This is ad hoc and requires the full dataset for evaluation. In this paper, we offer an optimization scheme that allows joint learning of the observables and the Koopman operator from online data. Our results show that we are able to reconstruct the evolution and represent the global features of complex dynamical systems.
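A minimal way to picture joint learning of the observables and the Koopman operator from streaming data is sketched below: a small network plays the role of the dictionary and a linear layer plays the operator, both updated with one gradient step per incoming state pair. The architecture, loss, and toy dynamics are assumptions for illustration, not the paper's optimization scheme.

```python
# Joint learning of observables psi(x) and a linear Koopman operator K from
# online state pairs (x_t, x_{t+1}) -- an assumed, simplified sketch.
import torch
import torch.nn as nn

class KoopmanSketch(nn.Module):
    def __init__(self, state_dim=2, obs_dim=16):
        super().__init__()
        self.psi = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                 nn.Linear(64, obs_dim))  # learned dictionary
        self.K = nn.Linear(obs_dim, obs_dim, bias=False)  # Koopman operator

    def forward(self, x_t, x_next):
        # enforce linearity in observable space: psi(x_{t+1}) ~ K psi(x_t)
        # (a reconstruction term is usually added to rule out trivial observables)
        return ((self.K(self.psi(x_t)) - self.psi(x_next)) ** 2).mean()

model = KoopmanSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def online_update(x_t, x_next):
    # one gradient step per arriving data pair, i.e. online training
    loss = model(x_t, x_next)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example stream: a damped pendulum observed one Euler step at a time.
x = torch.randn(1, 2)
for _ in range(100):
    x_next = x + 0.1 * torch.stack(
        [x[:, 1], -torch.sin(x[:, 0]) - 0.1 * x[:, 1]], dim=1)
    online_update(x, x_next)
    x = x_next.detach()
```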
We discover a robust self-supervised strategy tailored to molecular representations for generative masked language models through a series of in-depth ablations. Using this pre-training strategy, we train BARTSmiles, a BART-like model with an order of magnitude more compute than previous self-supervised molecular representations. In-depth evaluations show that BARTSmiles consistently outperforms other self-supervised representations across classification, regression, and generation tasks, setting a new state-of-the-art on 11 tasks. We then quantitatively show that, when applied to the molecular domain, the BART objective learns representations that implicitly encode our downstream tasks of interest. For example, by selecting seven neurons from a frozen BARTSmiles, we can obtain a model whose performance is within two percentage points of the fully fine-tuned model on the Clintox task. Lastly, we show that standard attribution interpretability methods, when applied to BARTSmiles, highlight certain substructures that chemists use to explain specific properties of molecules. The code and the pretrained model are publicly available.
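The "seven neurons" observation can be pictured as a small probing experiment: pick a handful of dimensions from frozen embeddings and fit a tiny classifier on them. The sketch below uses random placeholder features and an off-the-shelf feature selector purely to show the shape of such a probe; it is not the authors' selection procedure, and the Clintox data are not included.

```python
# A probing sketch: tiny classifier on a few selected neurons of frozen
# embeddings (placeholder data; illustrative only).
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# `embeddings` would come from a frozen encoder (one vector per molecule);
# random arrays stand in here just to show the shape of the computation.
embeddings = np.random.randn(500, 1024)
labels = np.random.randint(0, 2, size=500)

probe = make_pipeline(
    SelectKBest(mutual_info_classif, k=7),   # keep only seven neurons
    LogisticRegression(max_iter=1000),
)
probe.fit(embeddings, labels)
print("train accuracy:", probe.score(embeddings, labels))
```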
Document images are a ubiquitous source of data in which text is organized in a complex hierarchical structure ranging from fine granularity (e.g., words), through medium granularity (e.g., regions such as paragraphs or figures), to coarse granularity (e.g., the whole page). The spatial hierarchical relationships between content at different levels of granularity are crucial for document image understanding tasks. Existing methods learn features at either the word level or the region level but fail to consider both simultaneously. Word-level models are restricted by the fact that they originate from pure-text language models, which only encode word-level context. In contrast, region-level models attempt to encode regions corresponding to paragraphs or text blocks into a single embedding, but they perform worse with additional word-level features. To deal with these issues, we propose MGDoc, a new multi-modal multi-granular pre-training framework that encodes page-level, region-level, and word-level information at the same time. MGDoc uses a unified text-visual encoder to obtain multi-modal features across different granularities, which makes it possible to project the multi-granular features into the same hyperspace. To model the region-word correlation, we design a cross-granular attention mechanism and specific pre-training tasks that reinforce the model's learning of the hierarchy between regions and words. Experiments demonstrate that our proposed model learns better features that perform well across granularities and lead to improvements in downstream tasks.
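One plausible form of a cross-granular attention module, in which fine-grained word features attend to coarse-grained region features, is sketched below; the dimensions, residual fusion, and single-block layout are assumptions for illustration rather than MGDoc's exact design.

```python
# Cross-granular attention sketch: words (queries) attend to regions (keys/values).
import torch
import torch.nn as nn

class CrossGranularAttentionSketch(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, word_feats, region_feats):
        # word_feats:   (B, num_words, dim)   -- fine granularity
        # region_feats: (B, num_regions, dim) -- coarse granularity
        attended, _ = self.attn(word_feats, region_feats, region_feats)
        return self.norm(word_feats + attended)  # residual fusion across granularities

words, regions = torch.randn(2, 128, 256), torch.randn(2, 12, 256)
print(CrossGranularAttentionSketch()(words, regions).shape)  # (2, 128, 256)
```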
Oysters are the living vacuum cleaners of the ocean. Due to over-harvesting, oyster populations have declined exponentially. With current developments in automation and AI, robots are becoming an integral part of environmental monitoring processes, which can also be applied to oyster reef preservation. However, the underwater environment poses many difficulties, both practical (hazardous and time-consuming operations) and technical (distorted perception and unreliable navigation). To this end, we present a simulated environment that can be used to improve oyster reef monitoring. The simulated environment can be used to create photorealistic image datasets with multiple sensor data and the ground-truth location of a remotely operated vehicle (ROV). Currently, there are no photorealistic image datasets for oyster reef monitoring. Therefore, we hope to provide the underwater community with a new benchmark suite.
State estimation is an important aspect of many robotics applications. In this work, we consider the task of obtaining accurate state estimates for robotic systems by augmenting the dynamics models used within state estimation algorithms. Existing frameworks, such as moving horizon estimation (MHE) and the unscented Kalman filter (UKF), provide the flexibility to incorporate nonlinear dynamics and measurement models. However, this implies that the dynamics models within these algorithms have to be sufficiently accurate to guarantee the accuracy of the state estimates. To enhance the dynamics models and improve estimation accuracy, we utilize a deep learning framework known as knowledge-based neural ordinary differential equations (KNODE). The KNODE framework embeds prior knowledge into the training procedure and synthesizes an accurate hybrid model by fusing a prior first-principles model with a neural ordinary differential equation (NODE) model. In our proposed framework, we integrate the data-driven model into two novel model-based state estimation algorithms, denoted as KNODE-MHE and KNODE-UKF. These two algorithms are compared against their conventional counterparts in a number of robotics applications: state estimation using partial measurements, localization of a ground robot, and state estimation of a quadrotor. Through simulations and tests using real-world experimental data, we demonstrate the versatility and efficacy of the proposed learning-enhanced state estimation framework.
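The hybrid-model idea behind KNODE can be sketched as a first-principles dynamics function plus a learned residual; a state estimator such as an MHE or UKF would then use the resulting model as its process model. The double-integrator prior, network sizes, and Euler integration below are illustrative assumptions, not the exact KNODE implementation.

```python
# Hybrid dynamics sketch: physics prior + learned residual (illustrative).
import torch
import torch.nn as nn

def first_principles(x, u):
    # assumed prior: a double integrator, d(pos)/dt = vel, d(vel)/dt = u
    vel = x[..., 1:2]
    return torch.cat([vel, u], dim=-1)

class HybridDynamics(nn.Module):
    def __init__(self, state_dim=2, input_dim=1):
        super().__init__()
        self.residual = nn.Sequential(
            nn.Linear(state_dim + input_dim, 32), nn.Tanh(),
            nn.Linear(32, state_dim),
        )

    def forward(self, x, u):
        # prior physics plus a learned correction for unmodeled effects
        return first_principles(x, u) + self.residual(torch.cat([x, u], dim=-1))

    def step(self, x, u, dt=0.01):
        # one explicit-Euler step; a filter or MHE would call this as its process model
        return x + dt * self.forward(x, u)

model = HybridDynamics()
x, u = torch.zeros(1, 2), torch.ones(1, 1)
print(model.step(x, u))
```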
Oysters play a pivotal role in the bay ecosystem and are considered the living filters of the ocean. In recent years, oyster reefs have suffered significant damage from commercial over-harvesting and require preservation to maintain the ecological balance. The foundation of this preservation is estimating the oyster density, which requires accurate oyster detection. However, an accurate oyster detection system requires large datasets, which are expensive and labor-intensive to obtain in underwater environments. To this end, we present a novel method to mathematically model oysters and render images of oysters in simulation to boost detection performance with minimal real data. Utilizing our synthetic data along with real data for oyster detection, we obtain up to a 35.1% boost in performance compared to training our oyster detection network on real data alone. We also improve over the previous state-of-the-art by 12.7%. This shows that using the underlying geometric properties of objects can help improve recognition accuracy on limited datasets, and we hope that more researchers will adopt this strategy for datasets that are hard to obtain.